
14 November 2023

John Goerzen: It's More Important To Recognize What Direction People Are Moving Than Where They Are

I recently read a post on social media that went something like this (paraphrased): "If you buy an EV, you're part of the problem. You're advancing car culture and are actively hurting the planet. The only ethical thing to do is ditch your cars and put all your effort into supporting transit. Anything else is worthless." There is some truth there; supporting transit in areas where it makes sense is better than having more cars, even EVs. But of course the key here is "in areas where it makes sense." My road isn't even paved. I live miles from the nearest town. And get into the remote regions of the western USA and you'll find people that live 40 miles from the nearest neighbor. There's no realistic way that mass transit is ever going to be a thing in these areas. And even if it were somehow usable, sending buses over miles where nobody lives just to reach the few that are there would be worse than private EVs. And because I can hear this argument coming a mile away: no, it doesn't make sense to tell these people to just not live in the country because the planet won't support that anymore, because those people are literally the ones that feed the ones that live in the cities. The funny thing is: the person that wrote that shares my concerns and my goals. We both care deeply about climate change. We both want positive change. And I, ahem, recently bought an EV. I have seen this play out in so many ways over the last few years. Drive a car? Get yelled at. Support the wrong politician? Get a shunning. Not speak up loudly enough about the right politician? That's a yellin' too. The problem is, this doesn't make friends. In fact, it hurts the cause. It doesn't recognize this truth:
It is more important to recognize what direction people are moving than where they are.
I support trains and transit. I've donated money and written letters to politicians. But, realistically, there will never be transit here. People in my county are unable to move all the way to transit. But what can we do? Plenty. We bought an EV. I've been writing letters to the board of our local electrical co-op advocating for relaxation of rules around residential solar installations, and am planning one myself. It may well be that our solar-powered transportation winds up having a lower carbon footprint than the poster's transit use. Pick your favorite cause. Whatever it is, consider your strategy: What do you do with someone that is very far away from you, but has taken the first step to move an inch in your direction? Do you yell at them for not being there instantly? Or do you celebrate that they have changed and are moving?

13 November 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, October 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In October, 18 contributors were paid to work on Debian LTS; their reports are available:
  • Adrian Bunk did 8.0h (out of 7.75h assigned and 10.0h from previous period), thus carrying over 9.75h to the next month.
  • Anton Gladky did 9.5h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 5.5h to the next month.
  • Bastien Roucariès did 16.0h (out of 16.75h assigned and 1.0h from previous period), thus carrying over 1.75h to the next month.
  • Ben Hutchings did 8.0h (out of 17.75h assigned), thus carrying over 9.75h to the next month.
  • Chris Lamb did 17.0h (out of 17.75h assigned), thus carrying over 0.75h to the next month.
  • Emilio Pozuelo Monfort did 17.5h (out of 17.75h assigned), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 9.75h (out of 17.75h assigned), thus carrying over 8.0h to the next month.
  • Helmut Grohne did 1.5h (out of 10.0h assigned), thus carrying over 8.5h to the next month.
  • Lee Garrett did 10.75h (out of 17.75h assigned), thus carrying over 7.0h to the next month.
  • Markus Koschany did 30.0h (out of 30.0h assigned).
  • Ola Lundqvist did 4.0h (out of 0h assigned and 19.5h from previous period), thus carrying over 15.5h to the next month.
  • Roberto C. Sánchez did 12.0h (out of 5.0h assigned and 7.0h from previous period).
  • Santiago Ruano Rincón did 13.625h (out of 7.75h assigned and 8.25h from previous period), thus carrying over 2.375h to the next month.
  • Sean Whitton did 13.0h (out of 6.0h assigned and 7.0h from previous period).
  • Sylvain Beucler did 7.5h (out of 11.25h assigned and 6.5h from previous period), thus carrying over 10.25h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 16.0h (out of 9.25h assigned and 6.75h from previous period).
  • Utkarsh Gupta did 0.0h (out of 0.75h assigned and 17.0h from previous period), thus carrying over 17.75h to the next month.

Evolution of the situation In October, we have released 49 DLAs. Of particular note in the month of October, LTS contributor Chris Lamb issued DLA 3627-1 pertaining to Redis, the popular key-value database similar to Memcached, which was affected by an authentication bypass vulnerability. Fixing this vulnerability involved dealing with a race condition that could allow another process an opportunity to establish an otherwise unauthorized connection. LTS contributor Markus Koschany was involved in the mitigation of CVE-2023-44487, which is a protocol-level vulnerability in the HTTP/2 protocol. The impacts within Debian involved multiple packages, across multiple releases, with multiple advisories being released (both DSA for stable and old-stable, and DLA for LTS). Markus reviewed patches and security updates prepared by other Debian developers, investigated reported regressions, provided patches for the aforementioned regressions, and issued several security updates as part of this. Additionally, as MariaDB 10.3 (the version originally included with Debian buster) passed end-of-life earlier this year, LTS contributor Emilio Pozuelo Monfort has begun investigating the feasibility of backporting MariaDB 10.11. The work is in early stages, with much testing and analysis remaining before a final decision can be made, as this is only one of several potential courses of action concerning MariaDB. Finally, LTS contributor Lee Garrett has invested considerable effort into the development of the Functional Test Framework (FTF) here. While so far only an initial version has been published, it already has several features which we intend to begin leveraging for testing of LTS packages. In particular, the FTF supports provisioning multiple VMs for the purposes of performing functional tests of network-facing services (e.g., file services, authentication, etc.). These tests are in addition to the various unit-level tests which are executed during package build time. Development work will continue on the FTF, and as it matures and begins to see wider use within LTS we expect it to improve the quality of the updates we publish.

Thanks to our sponsors Sponsors that joined recently are in bold.

11 November 2023

Reproducible Builds: Reproducible Builds in October 2023

Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different. During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere. Amazingly, the agenda and all notes from all sessions are already online. The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Reflections on Reflections on Trusting Trust Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson's award-winning paper, Reflections on Trusting Trust:
[…] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically "can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?"
Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code which Russ requests and subsequently dissects in great but accessible detail.

Ecosystem factors of reproducible builds Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen's University in Ontario, Canada have published a paper on the Time to fix, causes and correlation with external ecosystem factors of unreproducible builds. The authors compare various response times within the Debian and Arch Linux distributions including, for example:
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.

NixOS installation image reproducible On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS have created an independent, bit-for-bit identical rebuilding of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.

CPython source tarballs now reproducible Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible, which was merged by Łukasz Langa.

New arm64 hardware from Codethink Long-time sponsor of the project, Codethink, have generously replaced our old "Moonshot-Slides", which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.

Community updates On our mailing list during October 2023 there were a number of threads, including:
  • Vagrant Cascadian continued a thread about the implementation details of a snapshot archive server required for reproducing previous builds. [ ]
  • Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to an interesting talk they recently gave at DockerCon titled "Reproducible builds with BuildKit for software supply-chain security".
  • Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. [ ]
Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links [ ][ ], adding a NixOS Flake file [ ] and sorting our publications page by date [ ]. Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.

Distribution work distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions, such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues. Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
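For readers who have not met SOURCE_DATE_EPOCH before: the Reproducible Builds specification has tools clamp any embedded timestamps to the value of that environment variable. The sketch below is only a generic illustration of the convention in C; it is not distro-info's actual implementation.
#include <stdlib.h>
#include <time.h>

/* Clamp a timestamp to SOURCE_DATE_EPOCH, if set, so that embedded dates
 * do not change from build to build. */
time_t reproducible_timestamp(time_t actual)
{
	const char *env = getenv("SOURCE_DATE_EPOCH");

	if (env != NULL) {
		time_t epoch = (time_t)strtoll(env, NULL, 10);
		if (actual > epoch)
			return epoch;
	}
	return actual;
}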

Software development The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: In addition, Chris Lamb fixed an issue in diffoscope so that if the equivalent of file -i returns text/plain, it falls back to comparing as a text file. This was originally filed as Debian bug #1053668 by Niels Thykier. [ ] This was then uploaded to Debian (and elsewhere) as version 251.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. [ ][ ][ ]
    • Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
    • Detect more cases of failures to delete schroots. [ ]
    • Document various bugs in bookworm which are (currently) being manually worked around. [ ]
  • Node-related changes:
    • Integrate the new arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]
    • Improve various node cleanup routines. [ ][ ][ ][ ]
    • General node maintenance. [ ][ ][ ][ ]
  • Monitoring-related changes:
    • Remove unused Munin monitoring plugins. [ ]
    • Complain less visibly about too many installed kernels. [ ]
  • Misc:
    • Enhance the firewall handling on Jenkins nodes. [ ][ ][ ][ ]
    • Install the fish shell everywhere. [ ]
In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 November 2023

Jonathan Dowland: Plato document reader

Kobo Libra 2
text-handling in Plato
Until now, I haven't hacked my Kobo Libra 2 ereader, despite knowing it is a relatively open device. The default document reader (Nickel) does everything I need it to. Syncing the books via USB is tedious, but I don't do it that often. Via Videah's blog post My E-Reader Setup, I learned of Plato, an alternative document reader. Plato doesn't really offer any headline features that I need, but it cost me nothing to try it out, so I installed it (fairly painlessly) and launched it just once. The library view seems good, although I've not used it much: I picked a book and read it through1, and I'm 60% through another2. I tend to read one ebook at a time. The main reader interface is great: Just the text3. Page transitions are really, really fast. Tweaking the backlight intensity is a little slower than Nickel: menu-driven rather than an active scroll region (which is convenient in Nickel but easy to accidentally turn to 0% and hard to recover from in pitch black). Now that I've started down the road of hacking the Kobo, I think I will explore wifi-syncing the library, perhaps using a variation on the hook scripts shared in Videah's blog post.

  1. Venomous Lumpsucker by Ned Beauman. It's fantastic. Guardian review
  2. There Is No Antimemetics Division by qntm
  3. I do miss Nickel's tiny progress bar somewhat: the only non-text bit of UX I left turned on.

7 November 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre- and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating Systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, perform color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content, and apply color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you through the color features of the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks provide features such as predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional (3D LUT) lookup tables. Here, we will understand how these color features are strategically placed into color blocks both before and after blending in the Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk "I'm not an AMD expert, but...", I briefly explained the organizational structure of the Linux/AMD display driver, where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding the relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of userspace data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools awaits to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R 2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the rendering intent.
AMD s display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB (gamma ~2.4): the piece-wise transfer function from IEC 61966-2-1:1999;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD's color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
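To make these named curves a little more concrete, here is a purely illustrative sketch (standard math, not driver code) of two of them for a normalized input value in [0.0, 1.0]: a pure power-law "Gamma 2.2" EOTF and the piece-wise sRGB EOTF from IEC 61966-2-1:1999.
#include <math.h>

/* Pure power function, e.g. the "Gamma 2.2" EOTF. */
double gamma22_eotf(double v)
{
	return pow(v, 2.2);
}

/* Piece-wise sRGB EOTF: a linear segment near black, a power segment above. */
double srgb_eotf(double v)
{
	return (v <= 0.04045) ? v / 12.92
			      : pow((v + 0.055) / 1.055, 2.4);
}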

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
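As a rough sketch of what a custom curve looks like on the userspace side, the LUT is handed to the driver as an array of struct drm_color_lut entries; in the uAPI struct each channel is a 16-bit value (0x0000 to 0xFFFF), whatever precision the hardware keeps internally. The identity ramp below is just a minimal starting point that a real client would replace with its own curve.
#include <stdint.h>
#include <xf86drmMode.h>	/* brings in struct drm_color_lut */

#define LUT_1D_SIZE 4096	/* the 4096 entries mentioned above */

/* Fill an identity ramp: each channel runs linearly from 0x0000 to 0xFFFF. */
void fill_identity_1d_lut(struct drm_color_lut lut[LUT_1D_SIZE])
{
	for (int i = 0; i < LUT_1D_SIZE; i++) {
		uint16_t v = (uint16_t)(((uint32_t)i * 0xFFFF) / (LUT_1D_SIZE - 1));

		lut[i].red = lut[i].green = lut[i].blue = v;
		lut[i].reserved = 0;
	}
}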

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They are also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch: there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As they vary based on the generation of hardware, let's take the DCN3+ family as a reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the Steam Deck/DCN301 driver as an example and look at the color pipeline capabilities described in the file drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resources.c:
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). "Gamma correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through color capabilities of DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.
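Before going block by block, here is a rough sketch of how userspace could attach one of these driver-specific properties to a plane through the libdrm atomic API, using a degamma 1D LUT blob as the example. Two assumptions to keep in mind: the property name "AMD_PLANE_DEGAMMA_LUT" is used here for illustration only (check the names your kernel actually exposes), and the DRM file descriptor is assumed to already have atomic support enabled via drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1).
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Look up a property ID on a plane by name. */
static uint32_t find_plane_prop(int fd, uint32_t plane_id, const char *name)
{
	drmModeObjectProperties *props =
		drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
	uint32_t id = 0;

	for (uint32_t i = 0; props && i < props->count_props; i++) {
		drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
		if (p && !strcmp(p->name, name))
			id = p->prop_id;
		drmModeFreeProperty(p);
	}
	drmModeFreeObjectProperties(props);
	return id;
}

/* Wrap a drm_color_lut array in a blob and attach it to the (assumed)
 * driver-specific degamma property of the given plane. */
int set_plane_degamma_lut(int fd, uint32_t plane_id,
			  const struct drm_color_lut *lut, size_t entries)
{
	uint32_t prop_id = find_plane_prop(fd, plane_id, "AMD_PLANE_DEGAMMA_LUT");
	uint32_t blob_id;
	int ret;

	if (!prop_id)
		return -1;	/* property not exposed by this kernel/driver */

	ret = drmModeCreatePropertyBlob(fd, lut, entries * sizeof(*lut), &blob_id);
	if (ret)
		return ret;

	drmModeAtomicReq *req = drmModeAtomicAlloc();
	if (!req)
		return -1;
	drmModeAtomicAddProperty(req, plane_id, prop_id, blob_id);
	ret = drmModeAtomicCommit(fd, req, 0, NULL);
	drmModeAtomicFree(req);
	return ret;
}
A real client would also do a test-only commit first and manage the lifetime of the blob, but the property plumbing above is the part relevant to this post.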

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the display hardware: AMD previously used a display architecture called DCE (Display and Compositing Engine), but newer hardware follows DCN (Display Core Next).
The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr. AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ families separate hardcoded curves and the 1D LUT into two blocks: the Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma Correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:
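To illustrate the s31.32 encoding mentioned above: each coefficient is a 64-bit sign-magnitude fixed-point value, with the sign in the top bit and 32 fractional bits. The sketch below assumes a row-major layout for the 12 entries of the 3x4 matrix, with the extra column acting as an offset term; treat that layout as an assumption rather than a statement about the hardware.
#include <stdint.h>
#include <math.h>

/* Convert a double to the S31.32 sign-magnitude fixed point used by
 * the DRM CTM properties. */
static uint64_t ctm_coeff_from_double(double v)
{
	uint64_t mag = (uint64_t)llround(fabs(v) * 4294967296.0); /* * 2^32 */

	mag &= ~(1ULL << 63);			/* keep the sign bit free */
	return (v < 0.0 ? (1ULL << 63) : 0) | mag;
}

/* Identity 3x4 CTM: 1.0 on the diagonal, zero everywhere else
 * (layout assumed row-major, offset column last). */
static void fill_identity_ctm_3x4(uint64_t matrix[12])
{
	for (int i = 0; i < 12; i++)
		matrix[i] = ctm_coeff_from_double(0.0);
	matrix[0] = matrix[5] = matrix[10] = ctm_coeff_from_double(1.0);
}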

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry in the DPP color caps (dc->caps.color.dpp.hw_3d_lut) means support for both the shaper 1D LUT and the 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass. References:

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific property advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function; values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension, red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach we opted to advertise its behavior to the userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with 17-size or 9-size (4913 and 729 entries respectively), and you can choose between 10-bit or 12-bit. In the current driver-specific work we focus on enabling only the 17-size, 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or generic API. Setting 3D LUT to NULL means bypass. References:

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both 1D LUT and pre-defined TF. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, the AMD color module will combine the user LUT values with the pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported: the driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for supporting the DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries: in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

Others

AMD CRTC Shaper and 3D LUT We have previously worked on exposing the CRTC shaper and CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting (not bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, the DC Output CSC (OCSC) sets pre-defined coefficients from DRM connector colorspace properties. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shashank and Chaitanya Kumar Borah published the plane color pipeline for Intel, and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year at the 2024 Display Hackfest in Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

5 November 2023

Thorsten Alteholz: My Debian Activities in October 2023

FTP master This month I accepted 361 and rejected 34 packages. The overall number of packages that got accepted was 362. Debian LTS This was my hundred-twelfth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded: Unfortunately upstream still could not resolve whether the patch for CVE-2023-42118 of libspf2 is valid, so no progress happened here.
I also continued to work on bind9 and tried to understand why some tests fail. Last but not least I did some days of frontdesk duties and took part in the LTS meeting. Debian ELTS This month was the sixty-third ELTS month. During my allocated time I uploaded: I also continued to work on bind9 and, as with the version in LTS, tried to understand why some tests fail. Last but not least I did some days of frontdesk duties. Debian Printing This month I uploaded a new upstream version of: Within the context of preserving old printing packages, I adopted: If you know of any other package that is also needed and still maintained by the QA team, please tell me. I also uploaded new upstream versions of packages or uploaded a package to fix one or the other issue: This work is generously funded by Freexian! Debian Mobcom This month I uploaded a package to fix one or the other issue: Other stuff This month I uploaded new upstream versions of packages, did a source upload for the transition or uploaded it to fix one or the other issue:

2 November 2023

François Marier: Upgrading from Debian 11 bullseye to 12 bookworm

Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful, but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.

Logcheck I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:
apt purge rsyslog
Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:
#/var/log/syslog
#/var/log/auth.log
I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:
--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+]\ IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9. ]+]\ )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:
rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*
Finally, I had to fix any unescaped | characters in my local rules. For example, error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.

Networking After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:
apt purge isc-dhcp-client
This however meant that I needed to ensure that my network configuration software does not depend on the now-deprecated DHCP client. On my laptop, I was already using NetworkManager for my main network interfaces, and that has built-in DHCP support.

Migration to systemd-networkd On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:
apt purge ifupdown
rm /etc/network/interfaces
putting the following in /etc/systemd/network/20-wired.network:
[Match]
Name=eno1
[Network]
DHCP=yes
MulticastDNS=yes
and then enabling/starting systemd-networkd:
systemctl enable systemd-networkd
systemctl start systemd-networkd
I also needed to install polkit:
apt install --no-install-recommends policykit-1
in order to allow systemd-networkd to set the hostname. In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.

Migration to predictable network interface names On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:
[Match]
Name=enp0s4
[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1
and switched to predictable network interface names by deleting these two files:
  • /etc/systemd/network/50-virtio-kernel-names.link
  • /etc/systemd/network/99-default.link
and then changing eth0 to enp0s4 in:
  • /etc/network/iptables.up.rules
  • /etc/network/ip6tables.up.rules
  • /etc/rc.local (for OpenVPN)
  • /etc/logcheck/ignored.d.*/*
Then I regenerated all initramfs:
update-initramfs -u -k all
and rebooted the virtual machine. Giving systemd-resolved control of /etc/resolv.conf After reading this history of DNS resolution on Linux, I decided to modernize my resolv.conf setup and let systemd-resolved handle /etc/resolv.conf. I installed the package:
apt install systemd-resolved
and then removed no-longer-needed packages:
apt purge resolvconf avahi-daemon
I also disabled support for Link-Local Multicast Name Resolution (LLMNR) after reading this person's reasoning by putting the following in /etc/systemd/resolved.conf.d/llmnr.conf:
[Resolve]
LLMNR=no
I verified that mDNS is enabled and LLMNR is disabled:
$ resolvectl mdns
Global: yes
Link 2 (enp0s25): yes
Link 3 (wlp3s0): yes
$ resolvectl llmnr
Global: no
Link 2 (enp0s25): no
Link 3 (wlp3s0): no
Note that if you want auto-discovery of local printers using CUPS, you need to keep avahi-daemon since cups-browsed doesn't support systemd-resolved. You can verify that it works using:
sudo lpinfo --include-schemes dnssd -v

Dynamic DNS I replaced ddclient with inadyn since it doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.

chkrootkit I moved my customizations in /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:
WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.

ssh As mentioned in Debian bug#1018106, to silence the following warnings:
sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled
I changed the following in /etc/pam.d/sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@ session    required     pam_limits.so
 session    required     pam_env.so # [1]
 # In Debian 4.0 (etch), locale-related environment variables were moved to
 # /etc/default/locale, so read that as well.
-session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale
+session    required     pam_env.so envfile=/etc/default/locale
 # SELinux needs to intervene at login time to ensure that the process starts
 # in the proper default security context.  Only sessions which are intended
I also made the following changes to /etc/ssh/sshd_config.d/local.conf based on the advice of ssh-audit 2.9.0:
-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

1 November 2023

Matthew Garrett: Why ACPI?

"Why does ACPI exist" - - the greatest thread in the history of forums, locked by a moderator after 12,239 pages of heated debate, wait no let me start again.

Why does ACPI exist? In the beforetimes power management on x86 was done by jumping to an opaque BIOS entry point and hoping it would do the right thing. It frequently didn't. We called this Advanced Power Management (Advanced because before this power management involved custom drivers for every machine and everyone agreed that this was a bad idea), and it involved the firmware having to save and restore the state of every piece of hardware in the system. This meant that assumptions about hardware configuration were baked into the firmware - failed to program your graphics card exactly the way the BIOS expected? Hurrah! It's only saved and restored a subset of the state that you configured and now potential data corruption for you. The developers of ACPI made the reasonable decision that, well, maybe since the OS was the one setting state in the first place, the OS should restore it.

So far so good. But some state is fundamentally device specific, at a level that the OS generally ignores. How should this state be managed? One way to do that would be to have the OS know about the device specific details. Unfortunately that means you can't ship the computer without having OS support for it, which means having OS support for every device (exactly what we'd got away from with APM). This, uh, was not an option the PC industry seriously considered. The alternative is that you ship something that abstracts the details of the specific hardware and makes that abstraction available to the OS. This is what ACPI does, and it's also what things like Device Tree do. Both provide static information about how the platform is configured, which can then be consumed by the OS and avoid needing device-specific drivers or configuration to be built-in.

The main distinction between Device Tree and ACPI is that Device Tree is purely a description of the hardware that exists, and so still requires the OS to know what's possible - if you add a new type of power controller, for instance, you need to add a driver for that to the OS before you can express that via Device Tree. ACPI decided to include an interpreted language to allow vendors to expose functionality to the OS without the OS needing to know about the underlying hardware. So, for instance, ACPI allows you to associate a device with a function to power down that device. That function may, when executed, trigger a bunch of register accesses to a piece of hardware otherwise not exposed to the OS, and that hardware may then cut the power rail to the device to power it down entirely. And that can be done without the OS having to know anything about the control hardware.

How is this better than just calling into the firmware to do it? Because the fact that ACPI declares that it's going to access these registers means the OS can figure out that it shouldn't, because it might otherwise collide with what the firmware is doing. With APM we had no visibility into that - if the OS tried to touch the hardware at the same time APM did, boom, almost impossible to debug failures (This is why various hardware monitoring drivers refuse to load by default on Linux - the firmware declares that it's going to touch those registers itself, so Linux decides not to in order to avoid race conditions and potential hardware damage. In many cases the firmware offers a collaborative interface to obtain the same data, and a driver can be written to get that. this bug comment discusses this for a specific board)

Unfortunately ACPI doesn't entirely remove opaque firmware from the equation - ACPI methods can still trigger System Management Mode, which is basically a fancy way to say "Your computer stops running your OS, does something else for a while, and you have no idea what". This has all the same issues that APM did, in that if the hardware isn't in exactly the state the firmware expects, bad things can happen. While historically there were a bunch of ACPI-related issues because the spec didn't define every single possible scenario and also there was no conformance suite (eg, should the interpreter be multi-threaded? Not defined by spec, but influences whether a specific implementation will work or not!), these days overall compatibility is pretty solid and the vast majority of systems work just fine - but we do still have some issues that are largely associated with System Management Mode.

One example is a recent Lenovo one, where the firmware appears to try to poke the NVME drive on resume. There's some indication that this is intended to deal with transparently unlocking self-encrypting drives on resume, but it seems to do so without taking IOMMU configuration into account and so things explode. It's kind of understandable why a vendor would implement something like this, but it's also kind of understandable that doing so without OS cooperation may end badly.

This isn't something that ACPI enabled - in the absence of ACPI firmware vendors would just be doing this unilaterally with even less OS involvement and we'd probably have even more of these issues. Ideally we'd "simply" have hardware that didn't support transitioning back to opaque code, but we don't (ARM has basically the same issue with TrustZone). In the absence of the ideal world, by and large ACPI has been a net improvement in Linux compatibility on x86 systems. It certainly didn't remove the "Everything is Windows" mentality that many vendors have, but it meant we largely only needed to ensure that Linux behaved the same way as Windows in a finite number of ways (ie, the behaviour of the ACPI interpreter) rather than in every single hardware driver, and so the chances that a new machine will work out of the box are much greater than they were in the pre-ACPI period.

There's an alternative universe where we decided to teach the kernel about every piece of hardware it should run on. Fortunately (or, well, unfortunately) we've seen that in the ARM world. Most device-specific simply never reaches mainline, and most users are stuck running ancient kernels as a result. Imagine every x86 device vendor shipping their own kernel optimised for their hardware, and now imagine how well that works out given the quality of their firmware. Does that really seem better to you?

It's understandable why ACPI has a poor reputation. But it's also hard to figure out what would work better in the real world. We could have built something similar on top of Open Firmware instead but the distinction wouldn't be terribly meaningful - we'd just have Forth instead of the ACPI bytecode language. Longing for a non-ACPI world without presenting something that's better and actually stands a reasonable chance of adoption doesn't make the world a better place.


Junichi Uekawa: Already November.

Already November. Been playing a bit with ureadahead. Build systems are hard. Resurrecting libnih to build again was a pain. Some tests rely on not being root so simply running things in Docker made it fail.

31 October 2023

Iustin Pop: Raspberry PI OS: upgrading and cross-grading

One of the downsides of running Raspberry PI OS is the fact that - not having the resources of pure Debian - upgrades are not recommended, and cross-grades (migrating between armhf and arm64) are not even mentioned. Is this really true? It is, after all, a Debian-based system, so it should in theory be doable. Let's try!

Upgrading The recently announced release based on Debian Bookworm here says:
We have always said that for a major version upgrade, you should re-image your SD card and start again with a clean image. In the past, we have suggested procedures for updating an existing image to the new version, but always with the caveat that we do not recommend it, and you do this at your own risk. This time, because the changes to the underlying architecture are so significant, we are not suggesting any procedure for upgrading a Bullseye image to Bookworm; any attempt to do this will almost certainly end up with a non-booting desktop and data loss. The only way to get Bookworm is either to create an SD card using Raspberry Pi Imager, or to download and flash a Bookworm image from here with your tool of choice.
Which means, it's time to actually try it... It turns out it's actually trivial, if you use RPIs as headless servers. I had only three issues:
  • if using an initrd, the new initrd-building scripts/hooks are looking for some binaries in /usr/bin, and not in /bin; solution: manually install the usrmerge package, and then re-run dpkg --configure -a;
  • also if using an initrd, the scripts are looking for the kernel config file in /boot/config-$(uname -r), and the Raspberry Pi kernel package doesn't provide this; workaround: modprobe configs && zcat /proc/config.gz > /boot/config-$(uname -r);
  • and finally, on normal RPI systems that don't use manual configuration of interfaces in /etc/network/interfaces, migrating from the previous dhcpcd to NetworkManager will break network connectivity and require you to log in locally and fix things.
I expect most people to hit only the 3rd, and almost no-one to use initrd on raspberry pi. But, overall, aside from these two issues and a couple of cosmetic ones (login.defs being rewritten from scratch and showing a baffling diff, for example), it was easy. Is it worth doing? Definitely. Had no data loss, and no non-booting system.

Cross-grading (32 bit to 64 bit userland) This one is actually painful. Internet searches go from "it's possible, I think" to "it's definitely not worth trying". Examples: Aside from these, there are a gazillion other posts about switching the kernel to 64 bit. And that's worth doing on its own, but it's only half the way. So, armed with two different systems - a RPI4 4GB and a RPI Zero W2 - I tried to do this. And while it can be done, it takes many hours - the first system took about 6 hours, the second the same, and a third RPI4 probably took only ~3 hours since I knew the problematic issues. So, what are the steps? Basically:
  • install devscripts, since you will need dget
  • enable new architecture in dpkg: dpkg --add-architecture arm64
  • switch over apt sources to include the 64 bit repos, which are different than the 32 bit ones (Raspberry PI OS did a migration here; normally a single repository has all architectures, of course)
  • downgrade all custom rpi packages/libraries to the standard bookworm/bullseye version, since dpkg won't usually allow a single library package to have different versions (I think it's possible to override, but I didn't bother)
  • install libc for the arm64 arch (this takes some effort, it s actually a set of 3-4 packages)
  • once the above is done, install whiptail:arm64 and rejoice at running a 64-bit binary!
  • then painfully go through sets of packages and migrate the set to arm64:
    • sometimes this works via apt, sometimes you'll need to use dget and dpkg -i
    • make sure you download both the armhf and arm64 versions before doing dpkg -i, since you'll need to roll back some installs
  • at one point, you'll be able to switch over dpkg and apt to arm64, at which point the default architecture flips over; from here, if you've done it at the right moment, it becomes very easy; you'll probably need an apt install --fix-broken, though, at first
  • and then, finish by replacing all packages with arm64 versions
  • and then, dpkg --remove-architecture armhf, reboot, and profit!
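Condensed into commands, the skeleton looks roughly like this - an illustrative sketch only, since the real work is iterating over many package sets and fixing the breakage described in the pain points below (package names and versions here are examples, not a tested sequence):
# enable the new architecture and point apt at the arm64 repositories
dpkg --add-architecture arm64
apt update
# bootstrap a 64-bit foothold (libc comes as a small set of packages)
apt install libc6:arm64 whiptail:arm64
# migrate packages set by set; fall back to dget + dpkg -i where apt refuses
apt install bash:arm64 coreutils:arm64   # ...and so on, set by set
dpkg -i some-package_1.2-3_arm64.deb
# once dpkg and apt themselves are arm64, the default architecture flips
apt install --fix-broken
# finally, drop the old architecture
dpkg --remove-architecture armhf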
But it's tears and blood to get to that point.

Pain point 1: RPI custom versions of packages Since the 32bit armhf architecture is a bit weird - having many variations - it turns out that raspberry pi OS has many packages that are very slightly tweaked to disable a compilation flag or work around build/test failures, or whatnot. Since we talk here about 64-bit capable processors, almost none of these are needed, but they do make life harder since the 64 bit version doesn't have those overrides. So what is needed would be to say "downgrade all armhf packages to the version in the debian upstream repo", but I couldn't find the right apt pinning incantation to do that. So what I did was to remove the 32bit repos, then use apt-show-versions to see which packages have versions that are no longer in any repo, then downgrade them. There's a further, minor, complication that there were about 3-4 packages with the same version but a different hash (!), which simply needed apt install --reinstall, I think.
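In shell terms, the workaround looks something like this (the grep pattern is an assumption about apt-show-versions' output wording, and the package/version in the downgrade line is just an example):
# list installed packages whose version no longer exists in any configured repo
apt-show-versions | grep 'No available version'
# then downgrade each one explicitly to the plain Debian version, e.g.
apt install --allow-downgrades somepackage=2.38-5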

Pain point 2: architecture independent packages There is one very big issue with dpkg in all this story, and the one that makes things very problematic: while you can have a library package installed multiple times for different architectures, as the files live in different paths, a non-library package can only be installed once (usually). For binary packages (arch:any), that is fine. But architecture-independent packages (arch:all) are problematic since usually they depend on a binary package, and they always depend on the default architecture version! Hrmm, and I just realise I don't have logs from this, so I'm only ~80% confident. But basically:
  • vim-solarized (arch:all) depends on vim (arch:any)
  • if you replace vim armhf with vim arm64, this will break vim-solarized, until the default architecture becomes arm64
So you need to keep track of which packages apt will de-install, for later re-installation. It is possible that Multi-Arch: foreign solves this, per the debian wiki which says:
Note that even though Architecture: all and Multi-Arch: foreign may look like similar concepts, they are not. The former means that the same binary package can be installed on different architectures. Yet, after installation such packages are treated as if they were native architecture (by definition the architecture of the dpkg package) packages. Thus Architecture: all packages cannot satisfy dependencies from other architectures without being marked Multi-Arch foreign.
It also has warnings about how to properly use this. But, in general, not many packages have it, so it is a problem.
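dpkg-query can show the field directly, so checking whether a given arch:all package (like the vim-solarized example above) is marked foreign is quick:
dpkg-query -W -f='${Package} ${Architecture} ${Multi-Arch}\n' vim-solarized
# or list everything already marked foreign
dpkg-query -W -f='${Package} ${Multi-Arch}\n' | grep ' foreign$'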

Pain point 3: remove + install vs overwrite It seems that depending on how the solver computes a solution, when migrating a package from 32 to 64 bit, it can choose either to:
  • overwrite in place the package (akin to dpkg -i)
  • remove + install later
The former is OK, the latter is not. Or, actually, it might be that apt can never do the former; for example (edited for brevity):
# apt install systemd:arm64 --no-install-recommends
The following packages will be REMOVED:
  systemd
The following NEW packages will be installed:
  systemd:arm64
0 upgraded, 1 newly installed, 1 to remove and 35 not upgraded.
Do you want to continue? [Y/n] y
dpkg: systemd: dependency problems, but removing anyway as you requested:
 systemd-sysv depends on systemd.
Removing systemd (247.3-7+deb11u2) ...
systemd is the active init system, please switch to another before removing systemd.
dpkg: error processing package systemd (--remove):
 installed systemd package pre-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 systemd
Processing was halted because there were too many errors.
But at the same time, overwriting in place is all good - via dpkg -i from /var/cache/apt/archives. In this case it manifested via a prerm script; in other cases it manifests via dependencies that are no longer satisfied for packages that can't be removed, etc. etc. So you will have to resort to dpkg -i a lot.
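In practice that means letting apt only download the package and then letting dpkg overwrite it in place, along these lines (the file name pattern is illustrative):
apt install --download-only systemd:arm64     # fetch the .deb into the apt cache only
dpkg -i /var/cache/apt/archives/systemd_*_arm64.deb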

Pain point 4: lib- packages that are not lib During the whole process, it is very tempting to just go ahead and install the corresponding arm64 package for all armhf lib packages, in one go, since these can coexist. Well, this simple plan is complicated by the fact that some packages are named libfoo-bar, but are actually holding (e.g.) the bar binary for the libfoo package. Examples:
  • libmagic-mgc contains /usr/lib/file/magic.mgc, which conflicts between the 32 and 64 bit versions; of course, it's the exact same file, so this should be an arch:all package, but
  • libpam-modules-bin and liblockfile-bin actually contain binaries (per the -bin suffix)
It's possible to work around all this, but it turns a 1-minute command:
# apt install $(dpkg -l | grep ^ii | awk '{print $2}' | grep :armhf | sed -e 's/:armhf/:arm64/')
into a 10-20 minute fight with packages (like most other steps).

Is it worth doing? Compared to the simple bullseye-to-bookworm upgrade, I'm not sure about this. The result? Yes, definitely, the system feels - weirdly - much more responsive, logged in over SSH. I guess the arm64 base architecture has some more efficient ops than the "lowest denominator armhf", so to say (e.g. there was in the 32 bit version some rpi-custom package with string ops), and thus migrating to 64 bit makes more things "faster", but this is subjective so it might actually not be true. But from the point of view of the effort? Unless you like to play with dpkg and apt, and understand how these work and break, I'd rather say: migrate to ansible and automate the deployment. It's doable, sure, and by the third system, I got this nailed down pretty well, but it was a lot of time spent. The good aspect is that I did 3 migrations:
  • rpi zero w2: bullseye 32 bit to 64 bit, then bullseye to bookworm
  • rpi 4: bullseye to bookworm, then bookworm 32bit to 64 bit
  • same, again, for a more important system
And all three worked well with no data loss. But I'm really glad I have this behind me; I probably wouldn't do a fourth system, even if forced. And now, waiting for the RPI 5 to be available. See you!

Russell Coker: Links October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1]. Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award? A Hacker News comment has an interesting explanation of Unix signals [3]. Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores. Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6]. This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones. Tara Barnett's Everything Open lecture "Swiss Army GLAM" had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development. Actually Hardcore Overclocking has an interesting YouTube video about the differences between x8 and x16 DDR4 DIMMs [9]. Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won't be easy or pleasant.

29 October 2023

Aigars Mahinovs: Figuring out finances part 4

At the end of the last part of this, we got a Home Assistant OS installation that contains in itself a Firefly III instance and that contains all the current financial information. Now I will try to connect the two. While it could be nice to create a fully-featured integration for Firefly III to Home Assistant to communicate all interesting values and events, I have an interest in programming a more advanced data point calculation for my budget needs, so a less generic, but more flexible approach is a better one for me. So I was quite interested when among the addons in the Home Assistant Addon Store I saw AppDaemon - a way to simply integrate arbitrary Python processing with Home Assistant. Let's see if that can do what I want. For a start, after reading the tutorial, I wanted to create a simple script that would use the Firefly III REST API to read the current balance of my main account and then send that to Home Assistant as a sensor value, which then can be displayed on a dashboard. As a quick try I modified the provided hello_world.py that is included in the default AppDaemon installation:
import requests
from datetime import datetime
import appdaemon.plugins.hass.hassapi as hass

app_token = "<FIREFLY_PERSONAL_ACCESS_TOKEN>"
firefly_url = "<FIREFLY_URL>"

class HelloWorld(hass.Hass):
    def initialize(self):
        self.run_every(self.set_asset, "now", 60 * 60)

    def set_asset(self, kwargs):
        ent = self.get_entity("sensor.firefly3_asset_sparkasse_main")
        if not ent.exists():
            ent.add(
                state=0.0,
                attributes={
                    "native_value": 0.0,
                    "native_unit_of_measurement": "EUR",
                    "state_class": "measurement",
                    "device_class": "monetary",
                    "current_balance_date": datetime.now(),
                },
            )
        r = requests.get(
            firefly_url + "/api/v1/accounts?type=asset",
            headers={
                "Authorization": "Bearer " + app_token,
                "Accept": "application/vnd.api+json",
                "Content-Type": "application/json",
            },
        )
        data = r.json()
        for account in data["data"]:
            if "attributes" not in account or "name" not in account["attributes"]:
                continue
            if account["attributes"]["name"] != "Sparkasse giro":
                continue
            self.log("Account: " + str(account["attributes"]))
            ent.set_state(
                state=account["attributes"]["current_balance"],
                attributes={
                    "native_value": account["attributes"]["current_balance"],
                    "current_balance_date": datetime.fromisoformat(
                        account["attributes"]["current_balance_date"]
                    ),
                },
            )
            self.log("Entity updated")
It uses a URL and personal access token to access the Firefly III API, gets the asset accounts information, then extracts info about the current balance and balance date of my main account and then creates and/or updates a "sensor" value in Home Assistant. This sensor is marked, via metadata, as a monetary value and as a measurement. This makes Home Assistant track this value in the database as a graphable changing value. I modified the file using the File Editor addon to edit the /config/appdaemon/apps/hello.py file. Each time the file is saved it is reloaded and logs can be seen in the AppDaemon Logs section - main_log for logging messages or error_log if there is a crash. It is useful to know that the requests library is included, but it is hard to see in the docs what else is included or if there is an easy way to install extra Python packages. This is already a very nice basis for custom value insertion into Home Assistant - whatever you can extract or calculate with a Python script, you can also inject into Home Assistant. With even this simple approach you can monitor balances, budgets, piggy-banks, bill payment status and even the sum of transactions in particular categories in a particular time window. Especially interesting data can be found in the insight section of the Firefly III API. The script above uses a trigger like self.run_every(self.set_asset, "now", 60 * 60) to simply run once per hour. The data in Firefly will not be updated too often anyway, at least not until we figure out how to make the bank connection run automatically without user interaction and not screw up already existing transactions along the way. In theory the webhook API of Firefly III could be used to trigger the data update instantly when any transaction is created or updated. Possibly even using the Home Assistant webhook integration. Hmmm. Maybe. Who am I kidding? I am going to make that work, for sure! :D But first - how about figuring out the future? So what do I want to do? In short, I want to predict what the balance on my main account will be just before the next month's salary comes in. To do this I would:
  • take the current balance of the main account
  • if this months salary is not paid out yet, then add that into the balance
  • deduct all still unpaid bills that are due between now and the target date
  • if the credit card account has not yet been reset to the main account, deduct current amount on the cards
  • if credit card account has been reset, but not from main account deducted yet, deduct the reset amount
To do that I need to use the Firefly API to read: current account info, the status of all bills including next due date and amount, transfer transactions between credit cards and the main account, and something that would store the expected salary date and amount. Ideally I'd use a recurring transaction or an income bill for this, but Firefly is not really cooperating with that. The easiest would be just to hardcode that in the script itself. And this is what I have come up with so far. To make the development process easier, I separated out the params for the API key and salary info, and app params for the month to predict for, and predict both this and next month's balances at the same time. I edited the script locally with Neovim and also ran it locally with a few mocks, uploading to Home Assistant via the SSH addon when the local executions looked good. So what's next? Well, I need to somewhat automate the sync with the bank (if at all possible). And for sure take a regular database and config backup :D
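For illustration, here is a rough sketch of how such a prediction could be wired up against the Firefly III v1 API - the account name and salary constants are placeholders, and the exact JSON attribute names (next_expected_match, amount_max) are assumptions rather than the author's actual script:
import requests
from datetime import date

FIREFLY_URL = "<FIREFLY_URL>"
APP_TOKEN = "<FIREFLY_PERSONAL_ACCESS_TOKEN>"
HEADERS = {
    "Authorization": "Bearer " + APP_TOKEN,
    "Accept": "application/vnd.api+json",
}
SALARY_AMOUNT = 1234.0   # hardcoded salary, as discussed above (placeholder value)
SALARY_DAY = 28          # assumed day of month the salary arrives

def api_get(path, **params):
    r = requests.get(FIREFLY_URL + "/api/v1" + path, headers=HEADERS, params=params)
    r.raise_for_status()
    return r.json()["data"]

def predict_balance(target_date):
    # 1. current balance of the main asset account
    accounts = api_get("/accounts", type="asset")
    main = next(a for a in accounts if a["attributes"]["name"] == "Sparkasse giro")
    balance = float(main["attributes"]["current_balance"])
    # 2. add the salary if it has not been paid out yet this month
    today = date.today()
    if today.day < SALARY_DAY:
        balance += SALARY_AMOUNT
    # 3. deduct unpaid bills due between now and the target date
    for bill in api_get("/bills"):
        attrs = bill["attributes"]
        nem = attrs.get("next_expected_match")
        if not nem:
            continue
        next_due = date.fromisoformat(nem[:10])
        if today <= next_due <= target_date:
            balance -= float(attrs["amount_max"])
    # (credit card reset handling would go here, along the lines described above)
    return balance
Hooking predict_balance() into an AppDaemon callback like the set_asset one above would then publish the prediction as another Home Assistant sensor.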

24 October 2023

Iustin Pop: OS updates are damn easy nowadays!

I'm baffled at how simple and reliable operating system updates have become. Upgraded Debian bullseye to bookworm, across a few systems, easy. On VMs, it's even so fast that installing the base system from scratch probably takes the same time. But Linux/Debian OFC works well. Shall we look at MacOS? Takes longer, but just runs and reboots a couple of times and then, bam, it's up and with windows restored. Surely Windows is the outlier? Nah, finally said yes to the "Upgrade to Win 11?" prompt, and it took a while to download (why is Win/Mac so heavy and slow to download? Debian just flies!), then rebooted a few times, and again, bam, it's up and GoG and Steam still work. I swear, there was a time when updating the OS felt like an accomplishment. Now, except for Raspberry Pi OS ("upgrades not supported, reinstall!", but I bet they also work), upgrading an actual OS is just like a new Android/iOS version. And yes, get off my lawn! I still have a lower digit count Slashdot ID.

23 October 2023

Russ Allbery: Review: Going Postal

Review: Going Postal, by Terry Pratchett
Series: Discworld #33
Publisher: Harper
Copyright: October 2004
Printing: November 2014
ISBN: 0-06-233497-2
Format: Mass market
Pages: 471
Going Postal is the 33rd Discworld novel. You could probably start here if you wanted to; there are relatively few references to previous books, and the primary connection (to Feet of Clay) is fully re-explained. I suspect that's why Going Postal garnered another round of award nominations. There are arguable spoilers for Feet of Clay, however. Moist von Lipwig is a con artist. Under a wide variety of names, he's swindled and forged his way around the Disc, always confident that he can run away from or talk his way out of any trouble. As Going Postal begins, however, it appears his luck has run out. He's about to be hanged. Much to his surprise, he wakes up after his carefully performed hanging in Lord Vetinari's office, where he's offered a choice. He can either take over the Ankh-Morpork post office, or he can die. Moist, of course, immediately agrees to run the post office, and then leaves town at the earliest opportunity, only to be carried back into Vetinari's office by a relentlessly persistent golem named Mr. Pump. He apparently has a parole officer. The clacks, Discworld's telegraph system first seen in The Fifth Elephant, has taken over most communications. The city is now dotted with towers, and the Grand Trunk can take them at unprecedented speed to even far-distant cities like Genua. The post office, meanwhile, is essentially defunct, as Moist quickly discovers. There are two remaining employees, the highly eccentric Junior Postman Groat who is still Junior because no postmaster has lasted long enough to promote him, and the disturbingly intense Apprentice Postman Stanley, who collects pins. Other than them, the contents of the massive post office headquarters are a disturbing mail sorting machine designed by Bloody Stupid Johnson that is not picky about which dimension or timeline the sorted mail comes from, and undelivered mail. A lot of undelivered mail. Enough undelivered mail that there may be magical consequences. All Moist has to do is get the postal system running again. Somehow. And not die in mysterious accidents like the previous five postmasters. Going Postal is a con artist story, but it's also a startup and capitalism story. Vetinari is, as always, solving a specific problem in his inimitable indirect way. The clacks were created by engineers obsessed with machinery and encodings and maintenance, but it's been acquired by... well, let's say private equity, because that's who they are, although Discworld doesn't have that term. They immediately did what private equity always did: cut out everything that didn't extract profit, without regard for either the service or the employees. Since the clacks are an effective monopoly and the new owners are ruthless about eliminating any possible competition, there isn't much to stop them. Vetinari's chosen tool is Moist. There are some parts of this setup that I love and one part that I'm grumbly about. A lot of the fun of this book is seeing Moist pulled into the mission of resurrecting the post office despite himself. He starts out trying to wriggle out of his assigned task, but, after a few early successes and a supernatural encounter with the mail, he can't help but start to care. Reformed con men often make good protagonists because one can enjoy the charisma without disliking the ethics. Pratchett adds the delightfully sharp-witted and cynical Adora Belle Dearheart as a partial reader stand-in, which makes the process of Moist becoming worthy of his protagonist role even more fun. 
I think that a properly functioning postal service is one of the truly monumental achievements of human society and doesn't get nearly enough celebration (or support, or pay, or good working conditions). Give me a story about reviving a postal service by someone who appreciates the tradition and social role as much as Pratchett clearly does and I'm there. The only frustration is that Going Postal is focused more on an immediate plot, so we don't get to see the larger infrastructure recovery that is clearly needed. (Maybe in later books?) That leads to my grumble, though. Going Postal and specifically the takeover of the clacks is obviously inspired by corporate structures in the later Industrial Revolution, but this book was written in 2004, so it's also a book about private equity and startups. When Vetinari puts a con man in charge of the post office, he runs it like a startup: do lots of splashy things to draw attention, promise big and then promise even bigger, stumble across a revenue source that may or may not be sustainable, hire like mad, and hope it all works out. This makes for a great story in the same way that watching trapeze artists or tightrope walkers is entertaining. You know it's going to work because that's the sort of book you're reading, so you can enjoy the audacity and wonder how Moist will manage to stay ahead of his promises. But it is still a con game applied to a public service, and the part of me that loves the concept of the postal service couldn't stop feeling like this is part of the problem. The dilemma that Vetinari is solving is a bit too realistic, down to the requirement that the post office be self-funding and not depend on city funds and, well, this is repugnant to me. Public services aren't businesses. Societies spend money to build things that they need to maintain society, and postal service is just as much one of those things as roads are. The ability of anyone to send a letter to anyone else, no matter how rural the address is, provides infrastructure on which a lot of important societal structure is built. Pratchett made me care a great deal about Ankh-Morpork's post office (not hard to do), and now I want to see it rebuilt properly, on firm foundations, without splashy promises and without a requirement that it pay for itself. Which I realize is not the point of Discworld at all, but the concept of running a postal service like a startup hits maybe a bit too close to home. Apart from that grumble, this is a great book if you're in the mood for a reformed con man story. I thought the gold suit was a bit over the top, but I otherwise thought Moist's slow conversion to truly caring about his job was deeply satisfying. The descriptions of the clacks are full of askew Discworld parodies of computer networking and encoding that I enjoyed more than I thought I would. This is also the book that introduced the now-famous (among Pratchett fans at least) GNU instruction for the clacks, and I think that scene is the most emotionally moving bit of Pratchett outside of Night Watch. Going Postal is one of the better books in the Discworld series to this point (and I'm sadly getting near the end). If you have less strongly held opinions about management and funding models for public services, or at least are better at putting them aside when reading fantasy novels, you're likely to like it even more than I did. Recommended. Followed by Thud!. The thematic sequel is Making Money. Rating: 8 out of 10

22 October 2023

Ravi Dwivedi: Software Freedom Day at sflc.in

Software Freedom Law Center, India, also known as sflc.in, organized an event to celebrate Software Freedom Day on 30th September 2023. Sahil, Contrapunctus, Suresh and I joined. The venue was the SFLC India office in Delhi. The sflc.in office was on the second floor of what looked like someone's apartment :). I also met Chirag, Orendra, Surbhi and others. My plan was to have a stall on LibreOffice and the Prav app to raise awareness about these projects. I didn't have a QR code for downloading the prav app printed already, so I asked the people at sflc.in if they could get it printed for me. They were very kind and helped me get a color printout. So, I got a stall in their main room. Surbhi had an Inkscape stall next to mine and gave me company. People came and asked about the prav project, and then I realized I was still too tired to explain the idea behind the prav project and about LibreOffice (after a long Kerala trip). We got a few prav app installs during the event, which is cool.
My stall. Photo credits: Tejaswini.
Sahil had a Debian stall and Contrapunctus had an OpenStreetMap stall. After about an hour, Revolution OS was screened for all of us to watch, along with popcorn. The documentary gave an overview of the history of the Free Software Movement. The office had a kitchen where fresh chai was being made and served to us. The organizers ordered a lot of good snacks for us.
Snacks and tea at the front desk. CC-BY-SA 4.0 by Ravi Dwivedi.
I came out of the movie hall to take more tea and snacks from the front desk. I saw a beautiful painting was hanging at the wall opposite to the front desk and Tejaswini (from sflc.in) revealed that she had made it. The tea was really good as it was freshly made in the kitchen. After the movie, we played a game of pictionary. We were divided into two teams. The game goes as follows: A person from a team is selected and given a term related to freedom respecting software written on a piece of paper, but concealed from other participants. Then that person draws something on the board (no logo, no alphabets) without speaking. If the team from which the person belongs correctly guesses the term, the team gets one step ahead on the leader board. The team who reaches the finish line wins. I recall some fun pictionaries. Like, the one in the picture below seems far from the word Wireguard and even then someone from the team guessed that word. Our team won in the end \o/.
Pictionary drawing nowhere close to the intended word Wireguard :), which was guessed. Photo by Ravi Dwivedi, CC-BY-SA 4.0.
Then, we posed for a group picture. At the end, SFLC.in had a delicious cake in store for us. They had some merchandise - handbags, T-shirts, etc. - which we could take if we donated some amount to SFLC.in. I bought a handbag with "Ban Plastic, not Internet" written on it in exchange for a donation. I hope that gives people around me a powerful message :).
Group photo. Photo credits: Tejaswini.
Tasty cake. CC-BY-SA 4.0 by Ravi Dwivedi.
Merchandise by sflc.in. CC-BY-SA 4.0 by Ravi Dwivedi.
All in all, a nice event by sflc.in :)

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to ... delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have the full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on the HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card for storage gets old very fast. Get a really small x86 PC with at least 4Gb RAM and an NVME SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation, which was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd: xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had Home Assistant Mobile installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must have. What is a must have, and what you can (really) only get with the Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures.
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB, created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned on the export page that sure looked a lot like a backup :D After doing all that work all over again I needed to make something new, so as not to feel like I wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: app and bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description, amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing category and budget and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account in the amount of 5 with description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
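For what it's worth, the backup the Firefly III docs recommend boils down to a plain mysqldump; a minimal sketch, assuming the database and user are both called firefly and the MariaDB addon is reachable from wherever you run this:
mysqldump -h <mariadb-host> -u firefly -p firefly > firefly-backup-$(date +%F).sql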

Ian Jackson: DigiSpark (ATTiny85) - Arduino, C, Rust, build systems

Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way. Here follows the rant. Introduction In a recent project (I'll write about the purpose, and the hardware, in another post) I chose to use a DigiSpark board. This is a small board with a USB-A tongue (but not a proper plug), and an ATTiny85 microcontroller. This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware, and an uncomplicated experience. Convenient hardware, I got. Arduino IDE The usual way to program these boards is via an IDE. I thought I'd go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package arduino was in Debian. But it turns out that the Debian package's version doesn't support the DigiSpark. (AFAICT from the list it offered me, I'm not sure it supports any ATTiny85 board.) Also, disturbingly, its board manager seemed to be offering to install board support, suggesting it would download stuff from the internet and run it. That wouldn't be acceptable for my main laptop. I didn't expect to be doing much programming or debugging, and the project didn't have significant security requirements: the chip, in my circuit, has only a very narrow ability to do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security "video laptop". That's the machine where I'm prepared to say yes to installing random software off the internet. So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark's vendor) in the preferences. That indeed allowed it to download a whole pile of stuff. Compilers, bootloader clients, god knows what. However, my tiny test program didn't make it to the board. Half-buried in a too-small window was an error message about the board's bootloader ("Micronucleus") being too new. The boards I had came pre-flashed with micronucleus 2.2. Which is hardly new. But even so the official Arduino IDE (or maybe the DigiSpark's board package?) still contains an old version. So now we have all the downsides of curl|bash-ware, but we're lacking the "it's up to date" and "it just works" upsides. Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file, in the middle of the heaps of stuff that the Arduino IDE's board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop.) So, "whatever". I did that. And it worked! Having demo'd my ability to run code on the board, I set about writing my program. Writing C again The programming language offered via the Arduino IDE is C. It's been a little while since I started a new thing in C, after having spent so much of the last several years writing Rust. C's primitiveness quickly started to grate, and the program couldn't easily be as DRY as I wanted (Don't Repeat Yourself, see Wilson et al, 2012, 4, p.6). But, I carried on; after all, this was going to be quite a small job.
Soon enough I had a program that looked right and compiled. Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #included my Arduino source file, and provided imitations of the few Arduino library calls my program used. As an side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn t seem to offer it, and I wasn t inclined to go further down that kind of path.) So I got the video laptop out, and used the Arduino IDE to flash the program. It didn t run properly. It hung almost immediately. Some very ad-hoc debugging via led-blinking (like printf debugging, only much worse) convinced me that my problem was as follows: Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the micrcocontroller). The way the Arduino IDE ran the compiler didn t pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren t the default in C in general because C compilers hate us all for compatibility reasons. I don t know why those warnings are not the default in the Arduino IDE, but my guess is that they didn t want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don t like error messages so we shouldn t report errors. And novice programmers are especially fazed by error messages so it s better to just let them struggle themselves with the arcane mysteries of undefined behaviour in C? The Arduino IDE does offer a dropdown for compiler warnings . The default is None. Setting it to All didn t produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the log window has a constant stream of strange messages from javax.jmdns, with hex DNS packet dumps. WTF. Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don t seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put nearby its sketch (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn t really very suited to a workflow where principal development occurs elsewhere. And, important settings such as the project s clock speed, or even the target board, or the compiler warning settings to use weren t stored in the project directory along with the actual code. I didn t look too hard, but I presume they must be in a dotfile somewhere. This is madness. Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn t like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.) As for the integer overflow bug: I didn t seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is possible, but not really documented?) 
I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface). Instead, I thought, if only I had written the thing in Rust . But that wasn t possible, was it? Does Rust even support this board? Rust on the DigiSpark I did a cursory web search and found a very useful blog post by Dylan Garrett. This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io. I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the trinket LED blink example, referenced by Dylan s blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board! I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop. I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow was superior. So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references. That didn t work. Now it didn t build. Even after I copied about .cargo/config.toml and rust-toolchain.toml it didn t build, producing a variety of exciting messages, depending what precisely I tried. I don t have detailed logs of my flailing: the instructions say to build it by cd ing to the subdirectory, and, given that what I was trying to do was to not follow those instructions, it didn t seem sensible to try to prepare a proper repro so I could file a ticket. I wasn t optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it s not usually fun. Looking at some of the build control files, things seemed quite complicated. Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply local copies of them anyway. I decided to just git subtree add the avr-hal git tree. (That seemed better than the approach taken by the avr-hal project s cargo template, since that template involve a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn t feel like I wanted to spend time investigating, using that seems like it would possibly have invited more trouble. Also, I don t like package templates very much. They re a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.) Since I couldn t get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] instruction. 
That s not pretty, but it worked. I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn t where it expects, and/or the target directory containing build products is in a weird place. It also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file. RIIR (Rewrite It In Rust) Having got my build tooling sorted out I could go back to my actual program. I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting). Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away! RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that s not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust s greater power, compared to C, made those cleanups easier, so making them worthwhile. However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.) The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn t bear fruit. So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it dealt with the underlying hardware s notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed. There was probably a more principled way. For example I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms. An offer of help As will be obvious from this posting, I m not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I m probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.) But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. 
If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions. An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package. Unfortunately, IMO Debian's Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn't well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself. Conclusions Embedded programming is still rather a mess and probably always will be. Embedded build systems can be bizarre. Documentation is scant. You're often expected to download board support packages full of mystery binaries, from the board vendor (or others). Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project's build and install configuration? Actually to be told by the compiler when you write obvious bugs? You're way off the beaten track. As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives. All is not lost Rust can be a significantly better bet than C for embedded software: The Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that's precisely what you want. Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly fewer crazy deviations from the language's normal build system. Overall, less bad software supply chain integrity. The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)


Jamie McClelland: Users without passwords

About fifteen years ago, while debugging a database problem, I was horrified to discover that we had two root users - one with the password I had been using and one without a password. Nooo! So, I wrote a simple maintenance script that searched for and deleted any user in our database without a password. I even made it part of our puppet recipe - since the database server was in use by users and I didn't want anyone using SQL statements to change their password to an empty value. Then I forgot about it. Recently, I upgraded our MariaDB databases to Debian bullseye, which inserted the mariadb.sys user, which... doesn't have a password set. It seems to be locked down in other ways, but my dumb script didn't know about that and happily deleted the user. Who needs that mariadb.sys user anyway? Apparently we all do. On one server, I can't log in as root anymore. On another server I can log in as root, but if I try to list users I get an error:
ERROR 1449 (HY000): The user specified as a definer ('mariadb.sys'@'localhost') does not exist
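For context, the sort of cleanup script described at the top of this post might have looked roughly like the following - a hypothetical sketch, not the actual script - dating from when accounts lived in the mysql.user table and an empty Password column meant no password at all:

#!/bin/sh
# Hypothetical sketch of a passwordless-account cleanup script.
# Assumes root access via unix_socket or ~/.my.cnf. It has no notion of
# a locked account, which is exactly why it would also remove mariadb.sys.
mysql mysql <<'EOF'
DELETE FROM user WHERE Password = '';
FLUSH PRIVILEGES;
EOF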
The Internet is full of useless advice. The most common suggestion is to simply insert that user. Except...
MariaDB [mysql]> CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
ERROR 1396 (HY000): Operation CREATE USER failed for 'mariadb.sys'@'localhost'
MariaDB [mysql]> 
Yeah, that's not going to work. It seems like we are dealing with two changes. One, the old mysql.user table was replaced by the global_priv table and then turned into a view for backwards compatibility. And two, for sensible reasons the default definer for this view has been changed from the root user to a user that, ahem, is unlikely to be changed or deleted. Apparently I can't add the mariadb.sys user because it would alter the user view, which has a definer that doesn't exist. Although I'm not sure if this is really the reason. Fortunately, I found an excellent suggestion for changing the definer of a view. My modified version of the answer is: run the following command, which will generate a SQL statement:
SELECT CONCAT("ALTER DEFINER=root@localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'mariadb.sys@localhost';
Then, execute the statement(s) it generates. And then also update the mysql.proc table:
UPDATE mysql.proc SET definer = 'root@localhost' WHERE definer = 'mariadb.sys@localhost';
And lastly, I had to run:
DELETE FROM tables_priv WHERE User = 'mariadb.sys';
FLUSH privileges;
Wait, was the tables_priv entry the whole problem all along? Not sure. But now I can run:
CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
GRANT SELECT, DELETE ON mysql.global_priv TO 'mariadb.sys'@'localhost';
And reverse the other statements:
SELECT CONCAT("ALTER DEFINER='mariadb.sys'@'localhost' VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'root@localhost';
[Execute the output.]
UPDATE mysql.proc SET definer = 'mariadb.sys@localhost' WHERE definer = 'root@localhost';
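Either of these CONCAT queries can also be piped straight back into the server instead of copy-pasting the output; a rough sketch for the first one (it is still worth eyeballing the generated SQL before trusting it):

# Generate the ALTER VIEW statements and execute them in the mysql schema.
# -N suppresses column headers, -B selects plain batch output.
mysql -N -B -e "SELECT CONCAT('ALTER DEFINER=root@localhost VIEW ', table_name, ' AS ', view_definition, ';') FROM information_schema.views WHERE table_schema='mysql' AND definer='mariadb.sys@localhost';" | mysql mysql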
And while we're on the topic of borked MariaDB authentication, here are the steps to change the root password and restore all root privileges if you can't get in at all or your root user is missing the GRANT OPTION (you can change ALTER to CREATE if the root user does not even exist):
systemctl stop mariadb
mariadbd-safe --skip-grant-tables --skip-networking &
mysql -u root
[mysql]> FLUSH PRIVILEGES;
[mysql]> ALTER USER 'root'@'localhost' IDENTIFIED VIA mysql_native_password USING PASSWORD('your-secret-password') OR unix_socket;
[mysql]> GRANT ALL PRIVILEGES ON *.* to 'root'@'localhost' WITH GRANT OPTION;
mariadb-admin shutdown
systemctl start mariadb

Russell Coker: Brother MFC-J4440DW Printer

I just had to set up a Brother MFC-J4440DW for a relative. They were replacing an old HP laser printer that mysteriously stopped printing as dark as it should; I don't know whether the HP printer had worn out or if the HP firmware decided to hobble it to make them buy a new printer. In either case HP is well known for shady behaviour with their printer firmware and should be avoided.

The new Brother printer has problems when using wifi and auto DNS. I don't know how much of that was due to the printer itself and how much was due to the wifi AP provided by Foxtel. Anyway, it works better with Ethernet and a fixed address (the wifi AP didn't allow me to set a fixed address). I think the main thing was configuring CUPS to connect via the IP address and not use Avahi etc.

One problem I had with printing was that programs like Chrome and LibreOffice would hang for about a minute before printing; that turned out to be due to /etc/cups/lpoptions having the old printer (which had been removed) listed as the default. It would be nice if the web configuration for CUPS would change that when I set the default printer.

CUPS doesn't seem to support USB printing for this printer. If it is possible to get this printer to print via USB then I welcome a comment describing how to do it. Scanning only seems to work on Ethernet, not on USB; the command for scanning that I ended up with was scanimage -d escl:http://10.0.0.3:80 . Again I welcome comments from anyone who has had success in scanning via USB.

There are probably some Linux users who would find it really inconvenient to set up a network interface specifically for printing. It's easy for me as I have a pile of spare ethernet cards and a box of cables, but some people would have to buy this. Also it's disappointing that Brother didn't include an Ethernet cable or a USB cable in the box. But if that makes it cheaper I can deal with that.

The resolution for scanning is only 832*1163 and it's black and white. I think that generally scanning in printers is a bad idea; taking a photo with a phone is a better way of scanning documents. Generally this printer works well and is cheap at only $299, a price for disposable hardware by today's standards.

There are Debian packages from Brother for the printer. The scanner package looks like it just configures scanimage, and I'm not sure whether the stock version of CUPS in Debian will do it without the Brother package. One thing I found interesting is that the package mfcj4440dwpdrv has the following shell code in the postinst to label files for SE Linux:
if [ "$(which semanage 2> /dev/null)" != '' ];then
semanage fcontext -a -t cupsd_rw_etc_t '/opt/brother/Printers/mfcj4440dw/inf(/.*)?'
semanage fcontext -a -t bin_t          '/opt/brother/Printers/mfcj4440dw/lpd(/.*)?'
semanage fcontext -a -t bin_t          '/opt/brother/Printers/mfcj4440dw/cupswrapper(/.*)?'
if [ "$(which restorecon 2> /dev/null)" != '' ];then
restorecon -R /opt/brother/Printers/mfcj4440dw
fi
fi
This is the first time I've seen a Debian package from a hardware vendor with SE Linux specific code. I can't just add those rules to the Debian policy, as that would make the semanage commands fail - adding an identical context spec is an error - and that would break the postinst. In the latest policy I'm uploading to Debian/Unstable (version 2.20231010-1) there are the following 3 lines to deal with this; the first was already there for some time and the other 2 I just added:
/opt/brother/Printers/([^/]+/)?inf(/.*)?        gen_context(system_u:object_r:cupsd_rw_etc_t,s0)
/opt/brother/Printers/[^/]+/lpd(/.*)?   gen_context(system_u:object_r:bin_t,s0)
/opt/brother/Printers/[^/]+/cupswrapper(/.*)?   gen_context(system_u:object_r:bin_t,s0)
The Brother employee(s) who added the SE Linux code to their package are welcome to connect to me on LinkedIn.
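For anyone doing a similar setup, the "connect via the IP address and not use Avahi" configuration described above can also be done from the command line; a rough sketch, assuming the printer answers driverless IPP at its fixed address (the queue name and the ipp:// URI are my guesses, and 10.0.0.3 is just the address used for scanning above):

# Create a queue pointing straight at the printer's fixed IP using the
# driverless "everywhere" model, and enable it.
lpadmin -p MFC-J4440DW -E -v ipp://10.0.0.3/ipp/print -m everywhere
# Make it the default, so a stale entry in /etc/cups/lpoptions (the cause
# of the one minute hang mentioned above) no longer applies.
lpoptions -d MFC-J4440DW
# Confirm the queue and the system default.
lpstat -p -d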

20 October 2023

Freexian Collaborators: Debian Contributions: Freexian meetup, debusine updates, lpr/lpd in Debian, and more! (by Utkarsh Gupta, Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Freexian Meetup, by Stefano Rivera, Utkarsh Gupta, et al. During DebConf, Freexian organized a meetup for its collaborators and those interested in learning more about Freexian and its services. It was well received and many people interested in Freexian showed up. Some developers who were interested in contributing to LTS came to get more details about joining the project. And some prospective customers came to get to know us and ask questions. Sadly, the tragic loss of Abraham shook DebConf, both individually and structurally. The meetup got rescheduled to a small room without video coverage. With that, we still had a wholesome interaction and here's a quick picture from the meetup taken by Utkarsh (which is also why he's missing!).

Debusine, by Raphaël Hertzog, et al. Freexian has been investing in debusine for a while, but development speed is about to increase dramatically thanks to funding from SovereignTechFund.de. Raphaël laid out the 5 milestones of the funding contract, and filed the issues for the first milestone. Together with Enrico and Stefano, they established a workflow for the expanded team. Among the first steps of this milestone, Enrico started to work on a developer-friendly description of debusine that we can use when we reach out to the many Debian contributors that we will have to interact with. And Raphaël started the design work of the autopkgtest and lintian tasks, i.e. what's the interface to schedule such tasks, and what behavior and what associated options do we support? At this point you might wonder what debusine is supposed to be; let us try to answer this: Debusine manages scheduling and distribution of Debian-related build and QA tasks to a network of worker machines. It also manages the resulting artifacts and provides the results in an easy-to-consume way. We want to make it easy for Debian contributors to leverage all the great QA tools that Debian provides. We want to build the next generation of Debian's build infrastructure, one that will continue to reliably do what it already does, but that will also enable distribution-wide experiments, custom package repositories and custom workflows with advanced package reviews. If this all sounds interesting to you, don't hesitate to watch the project on salsa.debian.org and to contribute.

lpr/lpd in Debian, by Thorsten Alteholz During DebConf23, Till Kamppeter presented CPDB (Common Print Dialog Backend), a new way to handle print queues. After this talk it was discussed whether the old lpr/lpd-based printing system could be abandoned in Debian or whether there is still demand for it. So Thorsten asked on the debian-devel mailing list whether anybody uses it. Oddly enough, these old packages are still useful:
  • Within a small network it is easier to distribute a printcap file than to properly configure CUPS clients (see the sketch after this list).
  • One of the biggest manufacturers of WLAN routers and DSL boxes only supports raw queues when attaching a USB printer to their hardware. Admittedly, CPDB still has problems with such raw queues.
  • The Pharos printing system at MIT is still lpd-based.
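To illustrate the first point in the list above, the printcap file being distributed only needs a short entry per shared queue; a hypothetical sketch, with the host, queue and spool names invented:

# Hypothetical: install a minimal /etc/printcap on a client, pointing it
# at a shared lpd queue on a print server.
cat > /etc/printcap <<'EOF'
lp|office|Shared office printer:\
	:rm=printserver.example.org:\
	:rp=office:\
	:sd=/var/spool/lpd/office:\
	:mx#0:\
	:sh:
EOF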
As a result, the lpr/lpd stuff is not yet ready to be abandoned, and Thorsten will adopt the relevant packages (or rather move them under the umbrella of the debian-printing team). Though it is not planned to develop new features, those packages should at least have a maintainer. This month Thorsten adopted rlpr, a utility for lpd printing without using /etc/printcap. The next one he is working on is lprng, an lpr/lpd printer spooling system. If you know of any other package that is also needed and still maintained by the QA team, please tell Thorsten.

/usr-merge, by Helmut Grohne Discussion about lifting the file move moratorium has been initiated with the CTTE and the release team. A formal lift is dependent on updating debootstrap in older suites, though. A significant number of packages can automatically move their systemd unit files if dh_installsystemd and systemd.pc change their installation targets. Unfortunately, doing so makes some packages FTBFS and therefore patches have been filed. The analysis tool, dumat, has been enhanced to better understand which upgrade scenarios are considered supported, to reduce false-positive bug filings, and gained a mode for local operation on a .changes file meant for inclusion in salsa-ci. The filing of bugs from dumat is still manual to improve the quality of reports. Since September, the moratorium has been lifted.

Miscellaneous contributions
  • Raphaël updated Django's backport in bullseye-backports to match the latest security release that was published in bookworm. Tracker.debian.org is still using that backport.
  • Helmut Grohne sent 13 patches for cross build failures.
  • Helmut Grohne performed a maintenance upload of debvm enabling its use in autopkgtests.
  • Helmut Grohne wrote an API-compatible reimplementation of autopkgtest-build-qemu. It is powered by mmdebstrap, therefore unprivileged, EFI-only and will soon be included in mmdebstrap.
  • Santiago continued the work regarding how to make it easier to (automatically) test reverse dependencies. An example of the ongoing work was presented during the Salsa CI BoF at DebConf 23.
    In fact, omniorb-dfsg test pipelines such as the above were used for the omniorb-dfsg 4.3.0 transition, verifying how the reverse dependencies (tango, pytango and omnievents) were built and how their autopkgtest jobs ran with the to-be-uploaded new omniorb-dfsg release.
  • Utkarsh and Stefano attended and helped run DebConf 23. Also continued winding up DebConf 22 accounting.
  • Anton Gladky did some science team uploads to fix RC bugs.
